Sparse regularization for least-squares AVP migration
Abstract
This paper presents least-squares wave-equation AVP (amplitude versus ray parameter) migration with non-quadratic regularization. We pose migration as an inverse problem and propose a cost function that incorporates a priori information about the AVP common image gather. In particular, we introduce two regularization goals: smoothness along the ray-parameter direction and sparseness in the depth direction. The latter yields high-resolution AVP gathers with robust estimates of amplitude variation with ray parameter. An iterative re-weighted least-squares conjugate gradients algorithm minimizes the cost function. We test the algorithm by solving a multi-channel deconvolution problem and a 2-D wave-equation AVP migration problem. We also discuss the difficulties and a potential pitfall of this new imaging scheme.

Introduction

In seismic exploration, we are often interested in two types of information about an earth model: structural information and rock-property parameters. Structural imaging is improved chiefly by increasing spatial resolution, and in recent years much attention has been paid to high-resolution structural imaging algorithms. At the same time, with the development of AVO technology, geophysicists are increasingly interested in amplitude-preserving migration/inversion methods. It has been shown (Nemeth et al., 1999; Duquet et al., 2000; Kuehl et al., 2002, 2003) that seismic resolution can be improved by inverting the demigration/migration kernel and by enforcing a regularization constraint, for example by introducing smoothness in the solution. However, as the results of these methods show, many artifacts remain in the solution due to operator mismatch, wavefield sampling and noise. One way to further enhance resolution and attenuate artifacts is to take advantage of the solution itself: iteratively feeding the result back as model-space regularization can lead to high-resolution, artifact-free seismic images.
This idea has been used in many fields of signal processing (Sacchi and Ulrych, 1995; Charbonnier et al., 1997; Youzwishen, 2001; Sacchi et al., 2003; Trad et al., 2003; Downton and Lines, 2004). In this paper, we utilize a model-dependent sparse regularization and a model-independent smoothing regularization for the AVP imaging problem. The first regularization arises from an update of the stacked image, and the second is implemented via a convolutional operator applied to AVP common image gathers along the ray-parameter direction. With this regularization strategy, we aim to develop an algorithm that simultaneously improves the structural interpretability and the amplitude accuracy of seismic images.

Methodology

Regularization is important for inversion because it takes advantage of a priori information about the model and allows us to solve ill-posed problems efficiently. For example, Kuehl and Sacchi (2002, 2003) showed that applying smoothing regularization in the ray-parameter direction can help to remove artifacts introduced by missing information, aliasing, noise and operator mismatch. Their scheme is based on the minimization of a quadratic cost function. Sacchi et al. (2003) showed that higher-resolution solutions can be obtained by solving a non-quadratic problem. In this paper we reformulate the cost function for the least-squares wave-equation AVP/AVA migration problem as follows:

J(m) = ||W(Lm) - d||_2^2 + λ ||F(SDm)||^2 ,   (1)

where m is the earth model (the AVP common image gathers), L is a wave-equation-based modeling operator that transforms the model to seismic data, d is the seismic data, and W is a sampling matrix that simulates the acquisition geometry. D is a model-independent high-pass filter that penalizes non-smooth solutions, S is a stacking operator that converts common image gathers to the stacked image, F is a model-dependent function used to enforce sparseness, and λ is a trade-off parameter that controls the amount of regularization.
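To illustrate how a cost function of this kind can be minimized, the following is a minimal sketch (not the paper's implementation) of iteratively re-weighted least squares on a toy 1-D deconvolution problem, analogous to the paper's first test. A Cauchy-type diagonal weight stands in for the sparse operator F, there is no smoothing or stacking term, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy 1-D deconvolution: d = L m + noise, with a sparse reflectivity m.
# We minimize ||Lm - d||^2 + lam * sum(ln(1 + m_i^2 / sigma^2)) by
# iteratively re-weighted least squares (IRLS): each outer iteration
# fixes Cauchy-derived diagonal weights and solves a quadratic problem.

rng = np.random.default_rng(0)
n = 80
wavelet = np.array([0.2, 0.6, 1.0, 0.6, 0.2])

# Banded Toeplitz convolution matrix built from the wavelet.
L = np.zeros((n, n))
for i, w in enumerate(wavelet):
    L += w * np.eye(n, k=i - 2)

m_true = np.zeros(n)
m_true[[15, 40, 41, 65]] = [1.0, -0.8, 0.5, 0.7]
d = L @ m_true + 0.02 * rng.standard_normal(n)

lam, sigma = 0.05, 0.1          # trade-off and Cauchy scale parameters
m = np.zeros(n)
for _ in range(10):              # outer IRLS iterations
    # Cauchy weights: large where m is small (pushes it to zero),
    # small where m is large (leaves strong reflectors untouched).
    q = lam / (sigma**2 + m**2)
    # Normal equations (L^T L + diag(q)) m = L^T d, solved directly here;
    # the paper uses conjugate gradients since its operator is matrix-free.
    m = np.linalg.solve(L.T @ L + np.diag(q), L.T @ d)
```

The re-weighting is what makes the regularization model-dependent: the quadratic subproblem is ordinary damped least squares, but its damping is rebuilt from the current solution at every outer iteration.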
Using the Cauchy norm (Sacchi and Ulrych, 1995), the sparse regularization operator F can be formulated as follows:
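The extract cuts off before the formula. For reference, the standard Cauchy-norm construction from Sacchi and Ulrych (1995), written here for a generic vector x, is sketched below; the paper's exact expression (applied to the stacked, filtered image SDm) may differ in detail.

```latex
% Cauchy regularizer for a vector x with scale parameter \sigma:
R_{\mathrm{Cauchy}}(\mathbf{x}) = \sum_{i} \ln\!\left(1 + \frac{x_i^{2}}{\sigma^{2}}\right)
% In iteratively re-weighted least squares this acts through a diagonal
% weight matrix rebuilt from the current model at each outer iteration:
Q_{ii} = \left(\sigma^{2} + x_i^{2}\right)^{-1}
```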